Commit 73ccd696 authored by Dave Airlie

Merge branch 'drm-next-3.9' of git://people.freedesktop.org/~agd5f/linux into drm-next

Alex writes:
- CS ioctl cleanup and unification.  This unifies a lot of functionality
that was duplicated across multiple generations of hardware.
- Add support for Oland GPUs.
- Deprecate UMS support.  Mesa and the ddx dropped support for UMS, and
apparently very few people still use it, since the UMS CS ioctl was broken
for several kernels and no one reported it.  It was fixed in 3.8/stable.
- Rework GPU reset.  Use the status registers to determine which blocks
to reset.  This better matches the recommended reset programming model
and also allows us to properly reset blocks besides GFX and DMA (see the
sketch after this list).
- Switch the VM set page code to use an IB rather than the ring.  This
fixes overflow issues when doing large page table updates on a small
ring like DMA.
- Several small cleanups and bug fixes.
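As a rough illustration of the reset rework, the sketch below shows the general
idea: read the block status registers and build a mask of RADEON_RESET_* flags so
that only the blocks reporting busy get soft-reset.  The bit values and flag names
are taken from the diff further down; the function, its caller and the register-read
plumbing are hypothetical stand-ins, not the driver's actual code.

    /* Minimal, self-contained sketch of "use status regs to decide what to
     * reset".  The bit values and RADEON_RESET_* flags below appear in this
     * merge; the function itself is illustrative, not the kernel code. */
    #include <stdint.h>
    #include <stdio.h>

    #define VMC_BUSY          (1 << 8)   /* SRBM_STATUS */
    #define SEM_BUSY          (1 << 14)
    #define RLC_BUSY          (1 << 15)
    #define IH_BUSY           (1 << 17)
    #define DMA_BUSY          (1 << 5)   /* SRBM_STATUS2 */

    #define RADEON_RESET_DMA  (1 << 2)
    #define RADEON_RESET_RLC  (1 << 6)
    #define RADEON_RESET_SEM  (1 << 7)
    #define RADEON_RESET_IH   (1 << 8)
    #define RADEON_RESET_VMC  (1 << 9)

    static uint32_t gpu_reset_mask(uint32_t srbm_status, uint32_t srbm_status2)
    {
        uint32_t reset_mask = 0;

        if (srbm_status & RLC_BUSY)
            reset_mask |= RADEON_RESET_RLC;
        if (srbm_status & SEM_BUSY)
            reset_mask |= RADEON_RESET_SEM;
        if (srbm_status & IH_BUSY)
            reset_mask |= RADEON_RESET_IH;
        if (srbm_status & VMC_BUSY)
            reset_mask |= RADEON_RESET_VMC;
        if (srbm_status2 & DMA_BUSY)
            reset_mask |= RADEON_RESET_DMA;

        return reset_mask;  /* only these blocks get soft-reset */
    }

    int main(void)
    {
        /* e.g. RLC and DMA report busy -> reset only RLC and DMA */
        printf("reset mask: 0x%x\n", gpu_reset_mask(RLC_BUSY, DMA_BUSY));
        return 0;
    }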

* 'drm-next-3.9' of git://people.freedesktop.org/~agd5f/linux: (38 commits)
  drm/radeon/dce6: fix display powergating
  drm/radeon: add Oland pci ids
  drm/radeon: radeon-asic updates for Oland
  drm/radeon: add ucode loading support for Oland
  drm/radeon: fill in gpu init for Oland
  drm/radeon: add Oland chip family
  drm/radeon: switch back to using the DMA ring for VM PT updates
  drm/radeon: use IBs for VM page table updates v2
  drm/radeon: don't reset the MC on IGPs/APUs
  drm/radeon: use the reset mask to determine if rings are hung
  drm/radeon: halt engines before disabling MC (si)
  drm/radeon: halt engines before disabling MC (cayman/TN)
  drm/radeon: halt engines before disabling MC (evergreen)
  drm/radeon: halt engines before disabling MC (6xx/7xx)
  drm/radeon: use status regs to determine what to reset (si)
  drm/radeon: use status regs to determine what to reset (cayman)
  drm/radeon: use status regs to determine what to reset (evergreen)
  drm/radeon: use status regs to determine what to reset (6xx/7xx)
  drm/radeon: rework GPU reset on cayman/TN
  ...
parents ed914f69 0e3d50bf
@@ -96,6 +96,7 @@ config DRM_RADEON
 	select DRM_TTM
 	select POWER_SUPPLY
 	select HWMON
+	select BACKLIGHT_CLASS_DEVICE
 	help
 	  Choose this option if you have an ATI Radeon graphics card.  There
 	  are both PCI and AGP versions.  You don't need to choose this to
......
-config DRM_RADEON_KMS
-	bool "Enable modesetting on radeon by default - NEW DRIVER"
+config DRM_RADEON_UMS
+	bool "Enable userspace modesetting on radeon (DEPRECATED)"
 	depends on DRM_RADEON
-	select BACKLIGHT_CLASS_DEVICE
 	help
-	  Choose this option if you want kernel modesetting enabled by default.
-	  This is a completely new driver. It's only part of the existing drm
-	  for compatibility reasons. It requires an entirely different graphics
-	  stack above it and works very differently from the old drm stack.
-	  i.e. don't enable this unless you know what you are doing it may
-	  cause issues or bugs compared to the previous userspace driver stack.
-	  When kernel modesetting is enabled the IOCTL of radeon/drm
-	  driver are considered as invalid and an error message is printed
-	  in the log and they return failure.
-	  KMS enabled userspace will use new API to talk with the radeon/drm
-	  driver. The new API provide functions to create/destroy/share/mmap
-	  buffer object which are then managed by the kernel memory manager
-	  (here TTM). In order to submit command to the GPU the userspace
-	  provide a buffer holding the command stream, along this buffer
-	  userspace have to provide a list of buffer object used by the
-	  command stream. The kernel radeon driver will then place buffer
-	  in GPU accessible memory and will update command stream to reflect
-	  the position of the different buffers.
-	  The kernel will also perform security check on command stream
-	  provided by the user, we want to catch and forbid any illegal use
-	  of the GPU such as DMA into random system memory or into memory
-	  not owned by the process supplying the command stream.
+	  Choose this option if you still need userspace modesetting.
+	  Userspace modesetting is deprecated for quite some time now, so
+	  enable this only if you have ancient versions of the DDX drivers.
@@ -56,8 +56,12 @@ $(obj)/r600_cs.o: $(obj)/r600_reg_safe.h
 $(obj)/evergreen_cs.o: $(obj)/evergreen_reg_safe.h $(obj)/cayman_reg_safe.h
-radeon-y := radeon_drv.o radeon_cp.o radeon_state.o radeon_mem.o \
-	radeon_irq.o r300_cmdbuf.o r600_cp.o
+radeon-y := radeon_drv.o
+
+# add UMS driver
+radeon-$(CONFIG_DRM_RADEON_UMS)+= radeon_cp.o radeon_state.o radeon_mem.o \
+	radeon_irq.o r300_cmdbuf.o r600_cp.o r600_blit.o
 # add KMS driver
 radeon-y += radeon_device.o radeon_asic.o radeon_kms.o \
 	radeon_atombios.o radeon_agp.o atombios_crtc.o radeon_combios.o \
@@ -67,7 +71,7 @@ radeon-y += radeon_device.o radeon_asic.o radeon_kms.o \
 	radeon_clocks.o radeon_fb.o radeon_gem.o radeon_ring.o radeon_irq_kms.o \
 	radeon_cs.o radeon_bios.o radeon_benchmark.o r100.o r300.o r420.o \
 	rs400.o rs600.o rs690.o rv515.o r520.o r600.o rv770.o radeon_test.o \
-	r200.o radeon_legacy_tv.o r600_cs.o r600_blit.o r600_blit_shaders.o \
+	r200.o radeon_legacy_tv.o r600_cs.o r600_blit_shaders.o \
 	r600_blit_kms.o radeon_pm.o atombios_dp.o r600_audio.o r600_hdmi.o \
 	evergreen.o evergreen_cs.o evergreen_blit_shaders.o evergreen_blit_kms.o \
 	evergreen_hdmi.o radeon_trace_points.o ni.o cayman_blit_shaders.o \
......
@@ -252,8 +252,6 @@ void atombios_crtc_dpms(struct drm_crtc *crtc, int mode)
 		radeon_crtc->enabled = true;
 		/* adjust pm to dpms changes BEFORE enabling crtcs */
 		radeon_pm_compute_clocks(rdev);
-		if (ASIC_IS_DCE6(rdev) && !radeon_crtc->in_mode_set)
-			atombios_powergate_crtc(crtc, ATOM_DISABLE);
 		atombios_enable_crtc(crtc, ATOM_ENABLE);
 		if (ASIC_IS_DCE3(rdev) && !ASIC_IS_DCE6(rdev))
 			atombios_enable_crtc_memreq(crtc, ATOM_ENABLE);
@@ -271,8 +269,6 @@ void atombios_crtc_dpms(struct drm_crtc *crtc, int mode)
 			atombios_enable_crtc_memreq(crtc, ATOM_DISABLE);
 		atombios_enable_crtc(crtc, ATOM_DISABLE);
 		radeon_crtc->enabled = false;
-		if (ASIC_IS_DCE6(rdev) && !radeon_crtc->in_mode_set)
-			atombios_powergate_crtc(crtc, ATOM_ENABLE);
 		/* adjust pm to dpms changes AFTER disabling crtcs */
 		radeon_pm_compute_clocks(rdev);
 		break;
@@ -1844,6 +1840,8 @@ static void atombios_crtc_disable(struct drm_crtc *crtc)
 	int i;
 	atombios_crtc_dpms(crtc, DRM_MODE_DPMS_OFF);
+	if (ASIC_IS_DCE6(rdev))
+		atombios_powergate_crtc(crtc, ATOM_ENABLE);
 	for (i = 0; i < rdev->num_crtc; i++) {
 		if (rdev->mode_info.crtcs[i] &&
......
@@ -223,6 +223,7 @@
 #define EVERGREEN_CRTC_STATUS                           0x6e8c
 #       define EVERGREEN_CRTC_V_BLANK                   (1 << 0)
 #define EVERGREEN_CRTC_STATUS_POSITION                  0x6e90
+#define EVERGREEN_CRTC_STATUS_HV_COUNT                  0x6ea0
 #define EVERGREEN_MASTER_UPDATE_MODE                    0x6ef8
 #define EVERGREEN_CRTC_UPDATE_LOCK                      0x6ed4
......
@@ -729,6 +729,18 @@
 #define	WAIT_UNTIL					0x8040
 #define	SRBM_STATUS				        0x0E50
+#define		RLC_RQ_PENDING				(1 << 3)
+#define		GRBM_RQ_PENDING				(1 << 5)
+#define		VMC_BUSY				(1 << 8)
+#define		MCB_BUSY				(1 << 9)
+#define		MCB_NON_DISPLAY_BUSY			(1 << 10)
+#define		MCC_BUSY				(1 << 11)
+#define		MCD_BUSY				(1 << 12)
+#define		SEM_BUSY				(1 << 14)
+#define		RLC_BUSY				(1 << 15)
+#define		IH_BUSY					(1 << 17)
+#define	SRBM_STATUS2				        0x0EC4
+#define		DMA_BUSY				(1 << 5)
 #define	SRBM_SOFT_RESET				        0x0E60
 #define		SRBM_SOFT_RESET_ALL_MASK		0x00FEEFA6
 #define		SOFT_RESET_BIF				(1 << 1)
@@ -924,10 +936,13 @@
 #define CAYMAN_DMA1_CNTL                                  0xd82c
 /* async DMA packets */
-#define DMA_PACKET(cmd, t, s, n)	((((cmd) & 0xF) << 28) |	\
-					 (((t) & 0x1) << 23) |		\
-					 (((s) & 0x1) << 22) |		\
+#define DMA_PACKET(cmd, sub_cmd, n)	((((cmd) & 0xF) << 28) |	\
+					 (((sub_cmd) & 0xFF) << 20) |	\
 					 (((n) & 0xFFFFF) << 0))
+#define GET_DMA_CMD(h)			(((h) & 0xf0000000) >> 28)
+#define GET_DMA_COUNT(h)		((h) & 0x000fffff)
+#define GET_DMA_SUB_CMD(h)		(((h) & 0x0ff00000) >> 20)
 /* async DMA Packet types */
 #define	DMA_PACKET_WRITE				0x2
 #define	DMA_PACKET_COPY					0x3
@@ -980,16 +995,7 @@
 /*
  * PM4
  */
-#define	PACKET_TYPE0	0
-#define	PACKET_TYPE1	1
-#define	PACKET_TYPE2	2
-#define	PACKET_TYPE3	3
-#define CP_PACKET_GET_TYPE(h) (((h) >> 30) & 3)
-#define CP_PACKET_GET_COUNT(h) (((h) >> 16) & 0x3FFF)
-#define CP_PACKET0_GET_REG(h) (((h) & 0xFFFF) << 2)
-#define CP_PACKET3_GET_OPCODE(h) (((h) >> 8) & 0xFF)
-#define PACKET0(reg, n)	((PACKET_TYPE0 << 30) |			\
+#define PACKET0(reg, n)	((RADEON_PACKET_TYPE0 << 30) |		\
			 (((reg) >> 2) & 0xFFFF) |		\
			 ((n) & 0x3FFF) << 16)
 #define CP_PACKET2			0x80000000
@@ -998,7 +1004,7 @@
 #define PACKET2(v)	(CP_PACKET2 | REG_SET(PACKET2_PAD, (v)))
-#define PACKET3(op, n)	((PACKET_TYPE3 << 30) |			\
+#define PACKET3(op, n)	((RADEON_PACKET_TYPE3 << 30) |		\
			 (((op) & 0xFF) << 8) |			\
			 ((n) & 0x3FFF) << 16)
......
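For readers who do not have the DMA ring code in front of them, here is a small,
self-contained illustration of how the reworked DMA_PACKET()/GET_DMA_*() macros
above fit together.  The macro definitions are copied from the hunk; the test
program around them is only a sketch of mine.

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Macros as introduced in evergreend.h by this merge. */
    #define DMA_PACKET(cmd, sub_cmd, n)  ((((cmd) & 0xF) << 28) |      \
                                          (((sub_cmd) & 0xFF) << 20) | \
                                          (((n) & 0xFFFFF) << 0))
    #define GET_DMA_CMD(h)               (((h) & 0xf0000000) >> 28)
    #define GET_DMA_COUNT(h)             ((h) & 0x000fffff)
    #define GET_DMA_SUB_CMD(h)           (((h) & 0x0ff00000) >> 20)

    #define DMA_PACKET_WRITE             0x2

    int main(void)
    {
        /* 4-bit command, 8-bit sub-command, 20-bit dword count */
        uint32_t header = DMA_PACKET(DMA_PACKET_WRITE, 0, 16);

        assert(GET_DMA_CMD(header) == DMA_PACKET_WRITE);
        assert(GET_DMA_SUB_CMD(header) == 0);
        assert(GET_DMA_COUNT(header) == 16);

        printf("DMA header: 0x%08x\n", header);
        return 0;
    }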
@@ -49,6 +49,16 @@
 #define		RINGID(x)				(((x) & 0x3) << 0)
 #define		VMID(x)					(((x) & 0x7) << 0)
 #define	SRBM_STATUS				        0x0E50
+#define		RLC_RQ_PENDING				(1 << 3)
+#define		GRBM_RQ_PENDING				(1 << 5)
+#define		VMC_BUSY				(1 << 8)
+#define		MCB_BUSY				(1 << 9)
+#define		MCB_NON_DISPLAY_BUSY			(1 << 10)
+#define		MCC_BUSY				(1 << 11)
+#define		MCD_BUSY				(1 << 12)
+#define		SEM_BUSY				(1 << 14)
+#define		RLC_BUSY				(1 << 15)
+#define		IH_BUSY					(1 << 17)
 #define	SRBM_SOFT_RESET				        0x0E60
 #define		SOFT_RESET_BIF				(1 << 1)
@@ -68,6 +78,10 @@
 #define		SOFT_RESET_REGBB			(1 << 22)
 #define		SOFT_RESET_ORB				(1 << 23)
+#define	SRBM_STATUS2				        0x0EC4
+#define		DMA_BUSY				(1 << 5)
+#define		DMA1_BUSY				(1 << 6)
 #define	VM_CONTEXT0_REQUEST_RESPONSE			0x1470
 #define		REQUEST_TYPE(x)				(((x) & 0xf) << 0)
 #define		RESPONSE_TYPE_MASK			0x000000F0
@@ -474,16 +488,7 @@
 /*
  * PM4
  */
-#define	PACKET_TYPE0	0
-#define	PACKET_TYPE1	1
-#define	PACKET_TYPE2	2
-#define	PACKET_TYPE3	3
-#define CP_PACKET_GET_TYPE(h) (((h) >> 30) & 3)
-#define CP_PACKET_GET_COUNT(h) (((h) >> 16) & 0x3FFF)
-#define CP_PACKET0_GET_REG(h) (((h) & 0xFFFF) << 2)
-#define CP_PACKET3_GET_OPCODE(h) (((h) >> 8) & 0xFF)
-#define PACKET0(reg, n)	((PACKET_TYPE0 << 30) |			\
+#define PACKET0(reg, n)	((RADEON_PACKET_TYPE0 << 30) |		\
			 (((reg) >> 2) & 0xFFFF) |		\
			 ((n) & 0x3FFF) << 16)
 #define CP_PACKET2			0x80000000
@@ -492,7 +497,7 @@
 #define PACKET2(v)	(CP_PACKET2 | REG_SET(PACKET2_PAD, (v)))
-#define PACKET3(op, n)	((PACKET_TYPE3 << 30) |			\
+#define PACKET3(op, n)	((RADEON_PACKET_TYPE3 << 30) |		\
			 (((op) & 0xFF) << 8) |			\
			 ((n) & 0x3FFF) << 16)
......
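The same centralisation applies to the PM4 helpers: the per-file PACKET_TYPE*
defines and CP_PACKET_GET_* decoders are dropped in favour of shared
RADEON_PACKET_TYPE* constants that live in radeon.h (not shown in this excerpt).
A hedged, self-contained sketch, assuming the shared constants keep the old
values 0 and 3 (written as unsigned here so the shifts stay well defined):

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Assumed values for the shared constants, matching the old per-file
     * PACKET_TYPE0/PACKET_TYPE3 defines removed in the hunk above. */
    #define RADEON_PACKET_TYPE0  0u
    #define RADEON_PACKET_TYPE3  3u

    #define PACKET0(reg, n) ((RADEON_PACKET_TYPE0 << 30) |   \
                             (((reg) >> 2) & 0xFFFF) |       \
                             ((n) & 0x3FFF) << 16)
    #define PACKET3(op, n)  ((RADEON_PACKET_TYPE3 << 30) |   \
                             (((op) & 0xFF) << 8) |          \
                             ((n) & 0x3FFF) << 16)

    int main(void)
    {
        /* PM4 type-0 header: register offset in dwords + (count - 1) */
        uint32_t p0 = PACKET0(0x8040 /* WAIT_UNTIL */, 0);
        /* PM4 type-3 header: 8-bit opcode + (count - 1) */
        uint32_t p3 = PACKET3(0x10 /* example opcode */, 2);

        assert((p0 >> 30) == RADEON_PACKET_TYPE0);
        assert(((p3 >> 30) & 3) == RADEON_PACKET_TYPE3);
        printf("PACKET0: 0x%08x  PACKET3: 0x%08x\n", p0, p3);
        return 0;
    }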
@@ -81,10 +81,6 @@ struct r100_cs_track {
 int r100_cs_track_check(struct radeon_device *rdev, struct r100_cs_track *track);
 void r100_cs_track_clear(struct radeon_device *rdev, struct r100_cs_track *track);
-int r100_cs_packet_next_reloc(struct radeon_cs_parser *p,
-			      struct radeon_cs_reloc **cs_reloc);
-void r100_cs_dump_packet(struct radeon_cs_parser *p,
-			 struct radeon_cs_packet *pkt);
 int r100_cs_packet_parse_vline(struct radeon_cs_parser *p);
......
@@ -64,17 +64,6 @@
			 REG_SET(PACKET3_IT_OPCODE, (op)) |	\
			 REG_SET(PACKET3_COUNT, (n)))
-#define	PACKET_TYPE0	0
-#define	PACKET_TYPE1	1
-#define	PACKET_TYPE2	2
-#define	PACKET_TYPE3	3
-#define CP_PACKET_GET_TYPE(h) (((h) >> 30) & 3)
-#define CP_PACKET_GET_COUNT(h) (((h) >> 16) & 0x3FFF)
-#define CP_PACKET0_GET_REG(h) (((h) & 0x1FFF) << 2)
-#define CP_PACKET0_GET_ONE_REG_WR(h) (((h) >> 15) & 1)
-#define CP_PACKET3_GET_OPCODE(h) (((h) >> 8) & 0xFF)
 /* Registers */
 #define R_0000F0_RBBM_SOFT_RESET                     0x0000F0
 #define S_0000F0_SOFT_RESET_CP(x)                    (((x) & 0x1) << 0)
......
@@ -162,7 +162,7 @@ int r200_packet0_check(struct radeon_cs_parser *p,
 		if (r) {
 			DRM_ERROR("No reloc for ib[%d]=0x%04X\n",
 				  idx, reg);
-			r100_cs_dump_packet(p, pkt);
+			radeon_cs_dump_packet(p, pkt);
 			return r;
 		}
 		break;
@@ -175,11 +175,11 @@ int r200_packet0_check(struct radeon_cs_parser *p,
 			return r;
 		break;
 	case RADEON_RB3D_DEPTHOFFSET:
-		r = r100_cs_packet_next_reloc(p, &reloc);
+		r = radeon_cs_packet_next_reloc(p, &reloc, 0);
 		if (r) {
 			DRM_ERROR("No reloc for ib[%d]=0x%04X\n",
 				  idx, reg);
-			r100_cs_dump_packet(p, pkt);
+			radeon_cs_dump_packet(p, pkt);
 			return r;
 		}
 		track->zb.robj = reloc->robj;
@@ -188,11 +188,11 @@ int r200_packet0_check(struct radeon_cs_parser *p,
 		ib[idx] = idx_value + ((u32)reloc->lobj.gpu_offset);
 		break;
 	case RADEON_RB3D_COLOROFFSET:
-		r = r100_cs_packet_next_reloc(p, &reloc);
+		r = radeon_cs_packet_next_reloc(p, &reloc, 0);
 		if (r) {
 			DRM_ERROR("No reloc for ib[%d]=0x%04X\n",
 				  idx, reg);
-			r100_cs_dump_packet(p, pkt);
+			radeon_cs_dump_packet(p, pkt);
 			return r;
 		}
 		track->cb[0].robj = reloc->robj;
@@ -207,11 +207,11 @@ int r200_packet0_check(struct radeon_cs_parser *p,
 	case R200_PP_TXOFFSET_4:
 	case R200_PP_TXOFFSET_5:
 		i = (reg - R200_PP_TXOFFSET_0) / 24;
-		r = r100_cs_packet_next_reloc(p, &reloc);
+		r = radeon_cs_packet_next_reloc(p, &reloc, 0);
 		if (r) {
 			DRM_ERROR("No reloc for ib[%d]=0x%04X\n",
 				  idx, reg);
-			r100_cs_dump_packet(p, pkt);
+			radeon_cs_dump_packet(p, pkt);
 			return r;
 		}
 		if (!(p->cs_flags & RADEON_CS_KEEP_TILING_FLAGS)) {
@@ -260,11 +260,11 @@ int r200_packet0_check(struct radeon_cs_parser *p,
 	case R200_PP_CUBIC_OFFSET_F5_5:
 		i = (reg - R200_PP_TXOFFSET_0) / 24;
 		face = (reg - ((i * 24) + R200_PP_TXOFFSET_0)) / 4;
-		r = r100_cs_packet_next_reloc(p, &reloc);
+		r = radeon_cs_packet_next_reloc(p, &reloc, 0);
 		if (r) {
 			DRM_ERROR("No reloc for ib[%d]=0x%04X\n",
 				  idx, reg);
-			r100_cs_dump_packet(p, pkt);
+			radeon_cs_dump_packet(p, pkt);
 			return r;
 		}
 		track->textures[i].cube_info[face - 1].offset = idx_value;
@@ -278,11 +278,11 @@ int r200_packet0_check(struct radeon_cs_parser *p,
 		track->zb_dirty = true;
 		break;
 	case RADEON_RB3D_COLORPITCH:
-		r = r100_cs_packet_next_reloc(p, &reloc);
+		r = radeon_cs_packet_next_reloc(p, &reloc, 0);
 		if (r) {
 			DRM_ERROR("No reloc for ib[%d]=0x%04X\n",
 				  idx, reg);
-			r100_cs_dump_packet(p, pkt);
+			radeon_cs_dump_packet(p, pkt);
 			return r;
 		}
@@ -355,11 +355,11 @@ int r200_packet0_check(struct radeon_cs_parser *p,
 		track->zb_dirty = true;
 		break;
 	case RADEON_RB3D_ZPASS_ADDR:
-		r = r100_cs_packet_next_reloc(p, &reloc);
+		r = radeon_cs_packet_next_reloc(p, &reloc, 0);
 		if (r) {
 			DRM_ERROR("No reloc for ib[%d]=0x%04X\n",
 				  idx, reg);
-			r100_cs_dump_packet(p, pkt);
+			radeon_cs_dump_packet(p, pkt);
 			return r;
 		}
 		ib[idx] = idx_value + ((u32)reloc->lobj.gpu_offset);
......
@@ -615,7 +615,7 @@ static int r300_packet0_check(struct radeon_cs_parser *p,
 		if (r) {
 			DRM_ERROR("No reloc for ib[%d]=0x%04X\n",
 				  idx, reg);
-			r100_cs_dump_packet(p, pkt);
+			radeon_cs_dump_packet(p, pkt);
 			return r;
 		}
 		break;
@@ -630,11 +630,11 @@ static int r300_packet0_check(struct radeon_cs_parser *p,
 	case R300_RB3D_COLOROFFSET2:
 	case R300_RB3D_COLOROFFSET3:
 		i = (reg - R300_RB3D_COLOROFFSET0) >> 2;
-		r = r100_cs_packet_next_reloc(p, &reloc);
+		r = radeon_cs_packet_next_reloc(p, &reloc, 0);
 		if (r) {
 			DRM_ERROR("No reloc for ib[%d]=0x%04X\n",
 				  idx, reg);
-			r100_cs_dump_packet(p, pkt);
+			radeon_cs_dump_packet(p, pkt);
 			return r;
 		}
 		track->cb[i].robj = reloc->robj;
@@ -643,11 +643,11 @@ static int r300_packet0_check(struct radeon_cs_parser *p,
 		ib[idx] = idx_value + ((u32)reloc->lobj.gpu_offset);
 		break;
 	case R300_ZB_DEPTHOFFSET:
-		r = r100_cs_packet_next_reloc(p, &reloc);
+		r = radeon_cs_packet_next_reloc(p, &reloc, 0);
 		if (r) {
 			DRM_ERROR("No reloc for ib[%d]=0x%04X\n",
 				  idx, reg);
-			r100_cs_dump_packet(p, pkt);
+			radeon_cs_dump_packet(p, pkt);
 			return r;
 		}
 		track->zb.robj = reloc->robj;
@@ -672,11 +672,11 @@ static int r300_packet0_check(struct radeon_cs_parser *p,
 	case R300_TX_OFFSET_0+56:
 	case R300_TX_OFFSET_0+60:
 		i = (reg - R300_TX_OFFSET_0) >> 2;
-		r = r100_cs_packet_next_reloc(p, &reloc);
+		r = radeon_cs_packet_next_reloc(p, &reloc, 0);
 		if (r) {
 			DRM_ERROR("No reloc for ib[%d]=0x%04X\n",
 				  idx, reg);
-			r100_cs_dump_packet(p, pkt);
+			radeon_cs_dump_packet(p, pkt);
 			return r;
 		}
@@ -745,11 +745,11 @@ static int r300_packet0_check(struct radeon_cs_parser *p,
 		/* RB3D_COLORPITCH2 */
 		/* RB3D_COLORPITCH3 */
 		if (!(p->cs_flags & RADEON_CS_KEEP_TILING_FLAGS)) {
-			r = r100_cs_packet_next_reloc(p, &reloc);
+			r = radeon_cs_packet_next_reloc(p, &reloc, 0);
 			if (r) {
 				DRM_ERROR("No reloc for ib[%d]=0x%04X\n",
 					  idx, reg);
-				r100_cs_dump_packet(p, pkt);
+				radeon_cs_dump_packet(p, pkt);
 				return r;
 			}
@@ -830,11 +830,11 @@ static int r300_packet0_check(struct radeon_cs_parser *p,
 	case 0x4F24:
 		/* ZB_DEPTHPITCH */
 		if (!(p->cs_flags & RADEON_CS_KEEP_TILING_FLAGS)) {
-			r = r100_cs_packet_next_reloc(p, &reloc);
+			r = radeon_cs_packet_next_reloc(p, &reloc, 0);
 			if (r) {
 				DRM_ERROR("No reloc for ib[%d]=0x%04X\n",
 					  idx, reg);
-				r100_cs_dump_packet(p, pkt);
+				radeon_cs_dump_packet(p, pkt);
 				return r;
 			}
@@ -1045,11 +1045,11 @@ static int r300_packet0_check(struct radeon_cs_parser *p,
 		track->tex_dirty = true;
 		break;
 	case R300_ZB_ZPASS_ADDR:
-		r = r100_cs_packet_next_reloc(p, &reloc);
+		r = radeon_cs_packet_next_reloc(p, &reloc, 0);
 		if (r) {
 			DRM_ERROR("No reloc for ib[%d]=0x%04X\n",
 				  idx, reg);
-			r100_cs_dump_packet(p, pkt);
+			radeon_cs_dump_packet(p, pkt);
 			return r;
 		}
 		ib[idx] = idx_value + ((u32)reloc->lobj.gpu_offset);
@@ -1087,11 +1087,11 @@ static int r300_packet0_check(struct radeon_cs_parser *p,
 		track->cb_dirty = true;
 		break;
 	case R300_RB3D_AARESOLVE_OFFSET:
-		r = r100_cs_packet_next_reloc(p, &reloc);
+		r = radeon_cs_packet_next_reloc(p, &reloc, 0);
 		if (r) {
 			DRM_ERROR("No reloc for ib[%d]=0x%04X\n",
 				  idx, reg);
-			r100_cs_dump_packet(p, pkt);
+			radeon_cs_dump_packet(p, pkt);
 			return r;
 		}
 		track->aa.robj = reloc->robj;
@@ -1156,10 +1156,10 @@ static int r300_packet3_check(struct radeon_cs_parser *p,
 			return r;
 		break;
 	case PACKET3_INDX_BUFFER:
-		r = r100_cs_packet_next_reloc(p, &reloc);
+		r = radeon_cs_packet_next_reloc(p, &reloc, 0);
 		if (r) {
 			DRM_ERROR("No reloc for packet3 %d\n", pkt->opcode);
-			r100_cs_dump_packet(p, pkt);
+			radeon_cs_dump_packet(p, pkt);
 			return r;
 		}
 		ib[idx+1] = radeon_get_ib_value(p, idx + 1) + ((u32)reloc->lobj.gpu_offset);
@@ -1257,21 +1257,21 @@ int r300_cs_parse(struct radeon_cs_parser *p)
 	r100_cs_track_clear(p->rdev, track);
 	p->track = track;
 	do {
-		r = r100_cs_packet_parse(p, &pkt, p->idx);
+		r = radeon_cs_packet_parse(p, &pkt, p->idx);
 		if (r) {
 			return r;
 		}
 		p->idx += pkt.count + 2;
 		switch (pkt.type) {
-		case PACKET_TYPE0:
+		case RADEON_PACKET_TYPE0:
 			r = r100_cs_parse_packet0(p, &pkt,
 						  p->rdev->config.r300.reg_safe_bm,
 						  p->rdev->config.r300.reg_safe_bm_size,
 						  &r300_packet0_check);
 			break;
-		case PACKET_TYPE2:
+		case RADEON_PACKET_TYPE2:
 			break;
-		case PACKET_TYPE3:
+		case RADEON_PACKET_TYPE3:
 			r = r300_packet3_check(p, &pkt);
 			break;
 		default:
......
@@ -29,6 +29,8 @@
  *
  * Authors:
  *    Nicolai Haehnle <prefect_@gmx.net>
+ *
+ * ------------------------ This file is DEPRECATED! -------------------------
  */
 #include <drm/drmP.h>
......
@@ -65,17 +65,6 @@
			 REG_SET(PACKET3_IT_OPCODE, (op)) |	\
			 REG_SET(PACKET3_COUNT, (n)))
-#define	PACKET_TYPE0	0
-#define	PACKET_TYPE1	1
-#define	PACKET_TYPE2	2
-#define	PACKET_TYPE3	3
-#define CP_PACKET_GET_TYPE(h) (((h) >> 30) & 3)
-#define CP_PACKET_GET_COUNT(h) (((h) >> 16) & 0x3FFF)
-#define CP_PACKET0_GET_REG(h) (((h) & 0x1FFF) << 2)
-#define CP_PACKET0_GET_ONE_REG_WR(h) (((h) >> 15) & 1)
-#define CP_PACKET3_GET_OPCODE(h) (((h) >> 8) & 0xFF)
 /* Registers */
 #define R_000148_MC_FB_LOCATION                      0x000148
 #define S_000148_MC_FB_START(x)                      (((x) & 0xFFFF) << 0)
......
@@ -355,6 +355,7 @@
 #       define AVIVO_D1CRTC_V_BLANK                     (1 << 0)
 #define AVIVO_D1CRTC_STATUS_POSITION                    0x60a0
 #define AVIVO_D1CRTC_FRAME_COUNT                        0x60a4
+#define AVIVO_D1CRTC_STATUS_HV_COUNT                    0x60ac
 #define AVIVO_D1CRTC_STEREO_CONTROL                     0x60c4
 #define AVIVO_D1MODE_MASTER_UPDATE_MODE                 0x60e4
......
@@ -22,6 +22,8 @@
  *
  * Authors:
  *     Alex Deucher <alexander.deucher@amd.com>
+ *
+ * ------------------------ This file is DEPRECATED! -------------------------
  */
 #include <drm/drmP.h>
 #include <drm/radeon_drm.h>
@@ -488,37 +490,6 @@ set_default_state(drm_radeon_private_t *dev_priv)
 	ADVANCE_RING();
 }
-/* 23 bits of float fractional data */
-#define I2F_FRAC_BITS  23
-#define I2F_MASK ((1 << I2F_FRAC_BITS) - 1)
-/*
- * Converts unsigned integer into 32-bit IEEE floating point representation.
- * Will be exact from 0 to 2^24.  Above that, we round towards zero
- * as the fractional bits will not fit in a float.  (It would be better to
- * round towards even as the fpu does, but that is slower.)
- */
-__pure uint32_t int2float(uint32_t x)
-{
-	uint32_t msb, exponent, fraction;
-
-	/* Zero is special */
-	if (!x) return 0;
-
-	/* Get location of the most significant bit */
-	msb = __fls(x);
-
-	/*
-	 * Use a rotate instead of a shift because that works both leftwards
-	 * and rightwards due to the mod(32) behaviour.  This means we don't
-	 * need to check to see if we are above 2^24 or not.
-	 */
-	fraction = ror32(x, (msb - I2F_FRAC_BITS) & 0x1f) & I2F_MASK;
-	exponent = (127 + msb) << I2F_FRAC_BITS;
-	return fraction + exponent;
-}
 static int r600_nomm_get_vb(struct drm_device *dev)
 {
 	drm_radeon_private_t *dev_priv = dev->dev_private;
......
@@ -31,6 +31,37 @@
 #include "r600_blit_shaders.h"
 #include "radeon_blit_common.h"
+/* 23 bits of float fractional data */
+#define I2F_FRAC_BITS  23
+#define I2F_MASK ((1 << I2F_FRAC_BITS) - 1)
+/*
+ * Converts unsigned integer into 32-bit IEEE floating point representation.
+ * Will be exact from 0 to 2^24.  Above that, we round towards zero
+ * as the fractional bits will not fit in a float.  (It would be better to
+ * round towards even as the fpu does, but that is slower.)
+ */
+__pure uint32_t int2float(uint32_t x)
+{
+	uint32_t msb, exponent, fraction;
+
+	/* Zero is special */
+	if (!x) return 0;
+
+	/* Get location of the most significant bit */
+	msb = __fls(x);
+
+	/*
+	 * Use a rotate instead of a shift because that works both leftwards
+	 * and rightwards due to the mod(32) behaviour.  This means we don't
+	 * need to check to see if we are above 2^24 or not.
+	 */
+	fraction = ror32(x, (msb - I2F_FRAC_BITS) & 0x1f) & I2F_MASK;
+	exponent = (127 + msb) << I2F_FRAC_BITS;
+	return fraction + exponent;
+}
 /* emits 21 on rv770+, 23 on r600 */
 static void
 set_render_target(struct radeon_device *rdev, int format,
......
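Since int2float() moves to r600_blit_kms.c unchanged, a quick way to sanity-check
the algorithm outside the kernel is a small userspace harness.  This is my own
sketch: __fls() and ror32() are re-implemented with a compiler builtin and plain
C, and the comparison against a hardware float cast is only illustrative.

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define I2F_FRAC_BITS  23
    #define I2F_MASK ((1u << I2F_FRAC_BITS) - 1)

    /* userspace stand-ins for the kernel's __fls() and ror32() */
    static uint32_t fls_msb(uint32_t x)
    {
        return 31u - (uint32_t)__builtin_clz(x); /* GCC/Clang builtin */
    }
    static uint32_t ror32(uint32_t w, unsigned s)
    {
        return (w >> s) | (w << ((32 - s) & 31));
    }

    /* same algorithm as the kernel's int2float() shown above */
    static uint32_t int2float(uint32_t x)
    {
        uint32_t msb, exponent, fraction;

        if (!x)
            return 0;
        msb = fls_msb(x);
        fraction = ror32(x, (msb - I2F_FRAC_BITS) & 0x1f) & I2F_MASK;
        exponent = (127 + msb) << I2F_FRAC_BITS;
        return fraction + exponent;
    }

    int main(void)
    {
        uint32_t v = 640, bits = int2float(v);
        float f = (float)v, ref;

        memcpy(&ref, &bits, sizeof(ref));
        printf("int2float(%u) = 0x%08x (%f)\n", v, bits, (double)ref);
        assert(ref == f);  /* exact for values below 2^24 */
        return 0;
    }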
@@ -24,6 +24,8 @@
  * Authors:
  *     Dave Airlie <airlied@redhat.com>
  *     Alex Deucher <alexander.deucher@amd.com>
+ *
+ * ------------------------ This file is DEPRECATED! -------------------------
  */
 #include <linux/module.h>
......
@@ -182,6 +182,8 @@
 #define	CP_COHER_BASE					0x85F8
 #define	CP_DEBUG					0xC1FC
 #define	R_0086D8_CP_ME_CNTL				0x86D8
+#define		S_0086D8_CP_PFP_HALT(x)			(((x) & 1)<<26)
+#define		C_0086D8_CP_PFP_HALT(x)			((x) & 0xFBFFFFFF)
 #define		S_0086D8_CP_ME_HALT(x)			(((x) & 1)<<28)
 #define		C_0086D8_CP_ME_HALT(x)			((x) & 0xEFFFFFFF)
 #define	CP_ME_RAM_DATA					0xC160
@@ -1143,19 +1145,10 @@
 /*
  * PM4
  */
-#define	PACKET_TYPE0	0
-#define	PACKET_TYPE1	1
-#define	PACKET_TYPE2	2
-#define	PACKET_TYPE3	3
-#define CP_PACKET_GET_TYPE(h) (((h) >> 30) & 3)
-#define CP_PACKET_GET_COUNT(h) (((h) >> 16) & 0x3FFF)
-#define CP_PACKET0_GET_REG(h) (((h) & 0xFFFF) << 2)
-#define CP_PACKET3_GET_OPCODE(h) (((h) >> 8) & 0xFF)
-#define PACKET0(reg, n)	((PACKET_TYPE0 << 30) |			\
+#define PACKET0(reg, n)	((RADEON_PACKET_TYPE0 << 30) |		\
			 (((reg) >> 2) & 0xFFFF) |		\
			 ((n) & 0x3FFF) << 16)
-#define PACKET3(op, n)	((PACKET_TYPE3 << 30) |			\
+#define PACKET3(op, n)	((RADEON_PACKET_TYPE3 << 30) |		\
			 (((op) & 0xFF) << 8) |			\
			 ((n) & 0x3FFF) << 16)
@@ -1328,6 +1321,7 @@
 #define G_008010_VC_BUSY(x)			(((x) >> 11) & 1)
 #define G_008010_DB03_CLEAN(x)			(((x) >> 12) & 1)
 #define G_008010_CB03_CLEAN(x)			(((x) >> 13) & 1)
+#define G_008010_TA_BUSY(x)			(((x) >> 14) & 1)
 #define G_008010_VGT_BUSY_NO_DMA(x)		(((x) >> 16) & 1)
 #define G_008010_VGT_BUSY(x)			(((x) >> 17) & 1)
 #define G_008010_TA03_BUSY(x)			(((x) >> 18) & 1)
@@ -1395,6 +1389,7 @@
 #define G_000E50_MCDW_BUSY(x)			(((x) >> 13) & 1)
 #define G_000E50_SEM_BUSY(x)			(((x) >> 14) & 1)
 #define G_000E50_RLC_BUSY(x)			(((x) >> 15) & 1)
+#define G_000E50_IH_BUSY(x)			(((x) >> 17) & 1)
 #define G_000E50_BIF_BUSY(x)			(((x) >> 29) & 1)
 #define R_000E60_SRBM_SOFT_RESET		0x0E60
 #define S_000E60_SOFT_RESET_BIF(x)		(((x) & 1) << 1)
......
@@ -136,6 +136,15 @@ extern int radeon_lockup_timeout;
 #define RADEON_RESET_GFX			(1 << 0)
 #define RADEON_RESET_COMPUTE			(1 << 1)
 #define RADEON_RESET_DMA			(1 << 2)
+#define RADEON_RESET_CP				(1 << 3)
+#define RADEON_RESET_GRBM			(1 << 4)
+#define RADEON_RESET_DMA1			(1 << 5)
+#define RADEON_RESET_RLC			(1 << 6)
+#define RADEON_RESET_SEM			(1 << 7)
+#define RADEON_RESET_IH				(1 << 8)
+#define RADEON_RESET_VMC			(1 << 9)
+#define RADEON_RESET_MC				(1 << 10)
+#define RADEON_RESET_DISPLAY			(1 << 11)
 /*
  * Errata workarounds.
@@ -771,6 +780,7 @@ int radeon_ib_get(struct radeon_device *rdev, int ring,
 		  struct radeon_ib *ib, struct radeon_vm *vm,
 		  unsigned size);
 void radeon_ib_free(struct radeon_device *rdev, struct radeon_ib *ib);
+void radeon_ib_sync_to(struct radeon_ib *ib, struct radeon_fence *fence);
 int radeon_ib_schedule(struct radeon_device *rdev, struct radeon_ib *ib,
 		       struct radeon_ib *const_ib);
 int radeon_ib_pool_init(struct radeon_device *rdev);
@@ -1179,7 +1189,9 @@ struct radeon_asic {
 		void (*fini)(struct radeon_device *rdev);
 		u32 pt_ring_index;
-		void (*set_page)(struct radeon_device *rdev, uint64_t pe,
+		void (*set_page)(struct radeon_device *rdev,
+				 struct radeon_ib *ib,
+				 uint64_t pe,
 				 uint64_t addr, unsigned count,
 				 uint32_t incr, uint32_t flags);
 	} vm;
@@ -1757,6 +1769,7 @@ void r100_pll_errata_after_index(struct radeon_device *rdev);
 #define ASIC_IS_DCE6(rdev) ((rdev->family >= CHIP_ARUBA))
 #define ASIC_IS_DCE61(rdev) ((rdev->family >= CHIP_ARUBA) && \
			     (rdev->flags & RADEON_IS_IGP))
+#define ASIC_IS_DCE64(rdev) ((rdev->family == CHIP_OLAND))
 /*
  * BIOS helpers.
@@ -1801,7 +1814,7 @@ void radeon_ring_write(struct radeon_ring *ring, uint32_t v);
 #define radeon_gart_set_page(rdev, i, p) (rdev)->asic->gart.set_page((rdev), (i), (p))
 #define radeon_asic_vm_init(rdev) (rdev)->asic->vm.init((rdev))
 #define radeon_asic_vm_fini(rdev) (rdev)->asic->vm.fini((rdev))
-#define radeon_asic_vm_set_page(rdev, pe, addr, count, incr, flags) ((rdev)->asic->vm.set_page((rdev), (pe), (addr), (count), (incr), (flags)))
+#define radeon_asic_vm_set_page(rdev, ib, pe, addr, count, incr, flags) ((rdev)->asic->vm.set_page((rdev), (ib), (pe), (addr), (count), (incr), (flags)))
 #define radeon_ring_start(rdev, r, cp) (rdev)->asic->ring[(r)].ring_start((rdev), (cp))
 #define radeon_ring_test(rdev, r, cp) (rdev)->asic->ring[(r)].ring_test((rdev), (cp))
 #define radeon_ib_test(rdev, r, cp) (rdev)->asic->ring[(r)].ib_test((rdev), (cp))
@@ -1851,6 +1864,7 @@ void radeon_ring_write(struct radeon_ring *ring, uint32_t v);
 /* Common functions */
 /* AGP */
 extern int radeon_gpu_reset(struct radeon_device *rdev);
+extern void r600_set_bios_scratch_engine_hung(struct radeon_device *rdev, bool hung);
 extern void radeon_agp_disable(struct radeon_device *rdev);
 extern int radeon_modeset_init(struct radeon_device *rdev);
 extern void radeon_modeset_fini(struct radeon_device *rdev);
@@ -1972,6 +1986,19 @@ static inline int radeon_acpi_init(struct radeon_device *rdev) { return 0; }
 static inline void radeon_acpi_fini(struct radeon_device *rdev) { }
 #endif
+int radeon_cs_packet_parse(struct radeon_cs_parser *p,
+			   struct radeon_cs_packet *pkt,
+			   unsigned idx);
+bool radeon_cs_packet_next_is_pkt3_nop(struct radeon_cs_parser *p);
+void radeon_cs_dump_packet(struct radeon_cs_parser *p,
+			   struct radeon_cs_packet *pkt);
+int radeon_cs_packet_next_reloc(struct radeon_cs_parser *p,
+				struct radeon_cs_reloc **cs_reloc,
+				int nomm);
+int r600_cs_common_vline_parse(struct radeon_cs_parser *p,
+			       uint32_t *vline_start_end,
+			       uint32_t *vline_status);
 #include "radeon_object.h"
 #endif
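The vm.set_page() callback now takes a radeon_ib, so page-table updates are
accumulated in an indirect buffer and submitted once, instead of being written
dword by dword to a (possibly small) ring.  The following is a deliberately
simplified, hypothetical sketch of that difference; the structure, the PTE
encoding and the helper name are stand-ins of mine, not the driver's code.

    #include <stdint.h>
    #include <stdlib.h>

    /* Simplified stand-in: the real radeon_ib also carries fences, a VM, etc. */
    struct fake_ib {
        uint32_t *ptr;       /* CPU mapping of the IB buffer */
        unsigned length_dw;  /* how many dwords have been filled in */
    };

    /* Hypothetical helper in the spirit of the new vm.set_page() callback:
     * instead of one ring write per dword (which can overflow a small ring
     * such as DMA during big PT updates), append everything to the IB and
     * let the caller submit the whole IB once on pt_ring_index. */
    static void vm_set_page_ib(struct fake_ib *ib, uint64_t pe,
                               uint64_t addr, unsigned count,
                               uint32_t incr, uint32_t flags)
    {
        while (count--) {
            uint64_t value = addr | flags;  /* placeholder PTE encoding */

            ib->ptr[ib->length_dw++] = (uint32_t)(pe & 0xffffffff);
            ib->ptr[ib->length_dw++] = (uint32_t)(pe >> 32);
            ib->ptr[ib->length_dw++] = (uint32_t)(value & 0xffffffff);
            ib->ptr[ib->length_dw++] = (uint32_t)(value >> 32);
            pe += 8;        /* next 64-bit PTE */
            addr += incr;
        }
    }

    int main(void)
    {
        struct fake_ib ib = { .ptr = calloc(4096, sizeof(uint32_t)), .length_dw = 0 };

        if (!ib.ptr)
            return 1;
        vm_set_page_ib(&ib, 0x100000, 0x200000, 16, 4096, 0);
        free(ib.ptr);
        return 0;
    }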
@@ -946,7 +946,7 @@ static struct radeon_asic r600_asic = {
 			.cs_parse = &r600_cs_parse,
 			.ring_test = &r600_ring_test,
 			.ib_test = &r600_ib_test,
-			.is_lockup = &r600_gpu_is_lockup,
+			.is_lockup = &r600_gfx_is_lockup,
 		},
 		[R600_RING_TYPE_DMA_INDEX] = {
 			.ib_execute = &r600_dma_ring_ib_execute,
@@ -1030,7 +1030,7 @@ static struct radeon_asic rs780_asic = {
 			.cs_parse = &r600_cs_parse,
 			.ring_test = &r600_ring_test,
 			.ib_test = &r600_ib_test,
-			.is_lockup = &r600_gpu_is_lockup,
+			.is_lockup = &r600_gfx_is_lockup,
 		},
 		[R600_RING_TYPE_DMA_INDEX] = {
 			.ib_execute = &r600_dma_ring_ib_execute,
@@ -1114,7 +1114,7 @@ static struct radeon_asic rv770_asic = {
 			.cs_parse = &r600_cs_parse,
 			.ring_test = &r600_ring_test,
 			.ib_test = &r600_ib_test,
-			.is_lockup = &r600_gpu_is_lockup,
+			.is_lockup = &r600_gfx_is_lockup,
 		},
 		[R600_RING_TYPE_DMA_INDEX] = {
 			.ib_execute = &r600_dma_ring_ib_execute,
@@ -1198,7 +1198,7 @@ static struct radeon_asic evergreen_asic = {
 			.cs_parse = &evergreen_cs_parse,
 			.ring_test = &r600_ring_test,
 			.ib_test = &r600_ib_test,
-			.is_lockup = &evergreen_gpu_is_lockup,
+			.is_lockup = &evergreen_gfx_is_lockup,
 		},
 		[R600_RING_TYPE_DMA_INDEX] = {
 			.ib_execute = &evergreen_dma_ring_ib_execute,
@@ -1207,7 +1207,7 @@ static struct radeon_asic evergreen_asic = {
 			.cs_parse = &evergreen_dma_cs_parse,
 			.ring_test = &r600_dma_ring_test,
 			.ib_test = &r600_dma_ib_test,
-			.is_lockup = &r600_dma_is_lockup,
+			.is_lockup = &evergreen_dma_is_lockup,
 		}
 	},
 	.irq = {
@@ -1282,7 +1282,7 @@ static struct radeon_asic sumo_asic = {
 			.cs_parse = &evergreen_cs_parse,
 			.ring_test = &r600_ring_test,
 			.ib_test = &r600_ib_test,
-			.is_lockup = &evergreen_gpu_is_lockup,
+			.is_lockup = &evergreen_gfx_is_lockup,
 		},
 		[R600_RING_TYPE_DMA_INDEX] = {
 			.ib_execute = &evergreen_dma_ring_ib_execute,
@@ -1291,7 +1291,7 @@ static struct radeon_asic sumo_asic = {
 			.cs_parse = &evergreen_dma_cs_parse,
 			.ring_test = &r600_dma_ring_test,
 			.ib_test = &r600_dma_ib_test,
-			.is_lockup = &r600_dma_is_lockup,
+			.is_lockup = &evergreen_dma_is_lockup,
 		}
 	},
 	.irq = {
@@ -1366,7 +1366,7 @@ static struct radeon_asic btc_asic = {
 			.cs_parse = &evergreen_cs_parse,
 			.ring_test = &r600_ring_test,
 			.ib_test = &r600_ib_test,
-			.is_lockup = &evergreen_gpu_is_lockup,
+			.is_lockup = &evergreen_gfx_is_lockup,
 		},
 		[R600_RING_TYPE_DMA_INDEX] = {
 			.ib_execute = &evergreen_dma_ring_ib_execute,
@@ -1375,7 +1375,7 @@ static struct radeon_asic btc_asic = {
 			.cs_parse = &evergreen_dma_cs_parse,
 			.ring_test = &r600_dma_ring_test,
 			.ib_test = &r600_dma_ib_test,
-			.is_lockup = &r600_dma_is_lockup,
+			.is_lockup = &evergreen_dma_is_lockup,
 		}
 	},
 	.irq = {
@@ -1445,7 +1445,7 @@ static struct radeon_asic cayman_asic = {
 	.vm = {
 		.init = &cayman_vm_init,
 		.fini = &cayman_vm_fini,
-		.pt_ring_index = RADEON_RING_TYPE_GFX_INDEX,
+		.pt_ring_index = R600_RING_TYPE_DMA_INDEX,
 		.set_page = &cayman_vm_set_page,
 	},
 	.ring = {
@@ -1457,7 +1457,7 @@ static struct radeon_asic cayman_asic = {
 			.cs_parse = &evergreen_cs_parse,
 			.ring_test = &r600_ring_test,
 			.ib_test = &r600_ib_test,
-			.is_lockup = &evergreen_gpu_is_lockup,
+			.is_lockup = &cayman_gfx_is_lockup,
 			.vm_flush = &cayman_vm_flush,
 		},
 		[CAYMAN_RING_TYPE_CP1_INDEX] = {
@@ -1468,7 +1468,7 @@ static struct radeon_asic cayman_asic = {
 			.cs_parse = &evergreen_cs_parse,
 			.ring_test = &r600_ring_test,
 			.ib_test = &r600_ib_test,
-			.is_lockup = &evergreen_gpu_is_lockup,
+			.is_lockup = &cayman_gfx_is_lockup,
 			.vm_flush = &cayman_vm_flush,
 		},
 		[CAYMAN_RING_TYPE_CP2_INDEX] = {
@@ -1479,7 +1479,7 @@ static struct radeon_asic cayman_asic = {
 			.cs_parse = &evergreen_cs_parse,
 			.ring_test = &r600_ring_test,
 			.ib_test = &r600_ib_test,
-			.is_lockup = &evergreen_gpu_is_lockup,
+			.is_lockup = &cayman_gfx_is_lockup,
 			.vm_flush = &cayman_vm_flush,
 		},
 		[R600_RING_TYPE_DMA_INDEX] = {
@@ -1572,7 +1572,7 @@ static struct radeon_asic trinity_asic = {
 	.vm = {
 		.init = &cayman_vm_init,
 		.fini = &cayman_vm_fini,
-		.pt_ring_index = RADEON_RING_TYPE_GFX_INDEX,
+		.pt_ring_index = R600_RING_TYPE_DMA_INDEX,
 		.set_page = &cayman_vm_set_page,
 	},
 	.ring = {
@@ -1584,7 +1584,7 @@ static struct radeon_asic trinity_asic = {
 			.cs_parse = &evergreen_cs_parse,
 			.ring_test = &r600_ring_test,
 			.ib_test = &r600_ib_test,
-			.is_lockup = &evergreen_gpu_is_lockup,
+			.is_lockup = &cayman_gfx_is_lockup,
 			.vm_flush = &cayman_vm_flush,
 		},
 		[CAYMAN_RING_TYPE_CP1_INDEX] = {
@@ -1595,7 +1595,7 @@ static struct radeon_asic trinity_asic = {
 			.cs_parse = &evergreen_cs_parse,
 			.ring_test = &r600_ring_test,
 			.ib_test = &r600_ib_test,
-			.is_lockup = &evergreen_gpu_is_lockup,
+			.is_lockup = &cayman_gfx_is_lockup,
 			.vm_flush = &cayman_vm_flush,
 		},
 		[CAYMAN_RING_TYPE_CP2_INDEX] = {
@@ -1606,7 +1606,7 @@ static struct radeon_asic trinity_asic = {
 			.cs_parse = &evergreen_cs_parse,
 			.ring_test = &r600_ring_test,
 			.ib_test = &r600_ib_test,
-			.is_lockup = &evergreen_gpu_is_lockup,
+			.is_lockup = &cayman_gfx_is_lockup,
 			.vm_flush = &cayman_vm_flush,
 		},
 		[R600_RING_TYPE_DMA_INDEX] = {
@@ -1699,7 +1699,7 @@ static struct radeon_asic si_asic = {
 	.vm = {
 		.init = &si_vm_init,
 		.fini = &si_vm_fini,
-		.pt_ring_index = RADEON_RING_TYPE_GFX_INDEX,
+		.pt_ring_index = R600_RING_TYPE_DMA_INDEX,
 		.set_page = &si_vm_set_page,
 	},
 	.ring = {
@@ -1711,7 +1711,7 @@ static struct radeon_asic si_asic = {
 			.cs_parse = NULL,
 			.ring_test = &r600_ring_test,
 			.ib_test = &r600_ib_test,
-			.is_lockup = &si_gpu_is_lockup,
+			.is_lockup = &si_gfx_is_lockup,
 			.vm_flush = &si_vm_flush,
 		},
 		[CAYMAN_RING_TYPE_CP1_INDEX] = {
@@ -1722,7 +1722,7 @@ static struct radeon_asic si_asic = {
 			.cs_parse = NULL,
 			.ring_test = &r600_ring_test,
 			.ib_test = &r600_ib_test,
-			.is_lockup = &si_gpu_is_lockup,
+			.is_lockup = &si_gfx_is_lockup,
 			.vm_flush = &si_vm_flush,
 		},
 		[CAYMAN_RING_TYPE_CP2_INDEX] = {
@@ -1733,7 +1733,7 @@ static struct radeon_asic si_asic = {
 			.cs_parse = NULL,
 			.ring_test = &r600_ring_test,
 			.ib_test = &r600_ib_test,
-			.is_lockup = &si_gpu_is_lockup,
+			.is_lockup = &si_gfx_is_lockup,
 			.vm_flush = &si_vm_flush,
 		},
 		[R600_RING_TYPE_DMA_INDEX] = {
@@ -1744,7 +1744,7 @@ static struct radeon_asic si_asic = {
 			.cs_parse = NULL,
 			.ring_test = &r600_dma_ring_test,
 			.ib_test = &r600_dma_ib_test,
-			.is_lockup = &cayman_dma_is_lockup,
+			.is_lockup = &si_dma_is_lockup,
 			.vm_flush = &si_dma_vm_flush,
 		},
 		[CAYMAN_RING_TYPE_DMA1_INDEX] = {
@@ -1755,7 +1755,7 @@ static struct radeon_asic si_asic = {
 			.cs_parse = NULL,
 			.ring_test = &r600_dma_ring_test,
 			.ib_test = &r600_dma_ib_test,
-			.is_lockup = &cayman_dma_is_lockup,
+			.is_lockup = &si_dma_is_lockup,
 			.vm_flush = &si_dma_vm_flush,
 		}
 	},
@@ -1944,8 +1944,12 @@ int radeon_asic_init(struct radeon_device *rdev)
 	case CHIP_TAHITI:
 	case CHIP_PITCAIRN:
 	case CHIP_VERDE:
+	case CHIP_OLAND:
 		rdev->asic = &si_asic;
 		/* set num crtcs */
+		if (rdev->family == CHIP_OLAND)
+			rdev->num_crtc = 2;
+		else
 			rdev->num_crtc = 6;
 		break;
 	default:
......
@@ -319,7 +319,7 @@ void r600_dma_semaphore_ring_emit(struct radeon_device *rdev,
				  bool emit_wait);
 void r600_dma_ring_ib_execute(struct radeon_device *rdev, struct radeon_ib *ib);
 bool r600_dma_is_lockup(struct radeon_device *rdev, struct radeon_ring *ring);
-bool r600_gpu_is_lockup(struct radeon_device *rdev, struct radeon_ring *cp);
+bool r600_gfx_is_lockup(struct radeon_device *rdev, struct radeon_ring *cp);
 int r600_asic_reset(struct radeon_device *rdev);
 int r600_set_surface_reg(struct radeon_device *rdev, int reg,
			 uint32_t tiling_flags, uint32_t pitch,
@@ -422,7 +422,8 @@ int evergreen_init(struct radeon_device *rdev);
 void evergreen_fini(struct radeon_device *rdev);
 int evergreen_suspend(struct radeon_device *rdev);
 int evergreen_resume(struct radeon_device *rdev);
-bool evergreen_gpu_is_lockup(struct radeon_device *rdev, struct radeon_ring *cp);
+bool evergreen_gfx_is_lockup(struct radeon_device *rdev, struct radeon_ring *cp);
+bool evergreen_dma_is_lockup(struct radeon_device *rdev, struct radeon_ring *cp);
 int evergreen_asic_reset(struct radeon_device *rdev);
 void evergreen_bandwidth_update(struct radeon_device *rdev);
 void evergreen_ring_ib_execute(struct radeon_device *rdev, struct radeon_ib *ib);
@@ -473,13 +474,16 @@ int cayman_vm_init(struct radeon_device *rdev);
 void cayman_vm_fini(struct radeon_device *rdev);
 void cayman_vm_flush(struct radeon_device *rdev, int ridx, struct radeon_vm *vm);
 uint32_t cayman_vm_page_flags(struct radeon_device *rdev, uint32_t flags);
-void cayman_vm_set_page(struct radeon_device *rdev, uint64_t pe,
+void cayman_vm_set_page(struct radeon_device *rdev,
+			struct radeon_ib *ib,
+			uint64_t pe,
			uint64_t addr, unsigned count,
			uint32_t incr, uint32_t flags);
 int evergreen_ib_parse(struct radeon_device *rdev, struct radeon_ib *ib);
 int evergreen_dma_ib_parse(struct radeon_device *rdev, struct radeon_ib *ib);
 void cayman_dma_ring_ib_execute(struct radeon_device *rdev,
				struct radeon_ib *ib);
+bool cayman_gfx_is_lockup(struct radeon_device *rdev, struct radeon_ring *ring);
 bool cayman_dma_is_lockup(struct radeon_device *rdev, struct radeon_ring *ring);
 void cayman_dma_vm_flush(struct radeon_device *rdev, int ridx, struct radeon_vm *vm);
@@ -496,14 +500,17 @@ int si_init(struct radeon_device *rdev);
 void si_fini(struct radeon_device *rdev);
 int si_suspend(struct radeon_device *rdev);
 int si_resume(struct radeon_device *rdev);
-bool si_gpu_is_lockup(struct radeon_device *rdev, struct radeon_ring *cp);
+bool si_gfx_is_lockup(struct radeon_device *rdev, struct radeon_ring *cp);
+bool si_dma_is_lockup(struct radeon_device *rdev, struct radeon_ring *cp);
 int si_asic_reset(struct radeon_device *rdev);
 void si_ring_ib_execute(struct radeon_device *rdev, struct radeon_ib *ib);
 int si_irq_set(struct radeon_device *rdev);
 int si_irq_process(struct radeon_device *rdev);
 int si_vm_init(struct radeon_device *rdev);
 void si_vm_fini(struct radeon_device *rdev);
-void si_vm_set_page(struct radeon_device *rdev, uint64_t pe,
+void si_vm_set_page(struct radeon_device *rdev,
+		    struct radeon_ib *ib,
+		    uint64_t pe,
		    uint64_t addr, unsigned count,
		    uint32_t incr, uint32_t flags);
 void si_vm_flush(struct radeon_device *rdev, int ridx, struct radeon_vm *vm);
......
@@ -27,6 +27,8 @@
  * Authors:
  *    Kevin E. Martin <martin@valinux.com>
  *    Gareth Hughes <gareth@valinux.com>
+ *
+ * ------------------------ This file is DEPRECATED! -------------------------
  */
 #include <linux/module.h>
......
...@@ -29,9 +29,6 @@ ...@@ -29,9 +29,6 @@
#include "radeon_reg.h" #include "radeon_reg.h"
#include "radeon.h" #include "radeon.h"
void r100_cs_dump_packet(struct radeon_cs_parser *p,
struct radeon_cs_packet *pkt);
static int radeon_cs_parser_relocs(struct radeon_cs_parser *p) static int radeon_cs_parser_relocs(struct radeon_cs_parser *p)
{ {
struct drm_device *ddev = p->rdev->ddev; struct drm_device *ddev = p->rdev->ddev;
...@@ -128,18 +125,6 @@ static int radeon_cs_get_ring(struct radeon_cs_parser *p, u32 ring, s32 priority ...@@ -128,18 +125,6 @@ static int radeon_cs_get_ring(struct radeon_cs_parser *p, u32 ring, s32 priority
return 0; return 0;
} }
static void radeon_cs_sync_to(struct radeon_cs_parser *p,
struct radeon_fence *fence)
{
struct radeon_fence *other;
if (!fence)
return;
other = p->ib.sync_to[fence->ring];
p->ib.sync_to[fence->ring] = radeon_fence_later(fence, other);
}
static void radeon_cs_sync_rings(struct radeon_cs_parser *p) static void radeon_cs_sync_rings(struct radeon_cs_parser *p)
{ {
int i; int i;
...@@ -148,7 +133,7 @@ static void radeon_cs_sync_rings(struct radeon_cs_parser *p) ...@@ -148,7 +133,7 @@ static void radeon_cs_sync_rings(struct radeon_cs_parser *p)
if (!p->relocs[i].robj) if (!p->relocs[i].robj)
continue; continue;
radeon_cs_sync_to(p, p->relocs[i].robj->tbo.sync_obj); radeon_ib_sync_to(&p->ib, p->relocs[i].robj->tbo.sync_obj);
} }
} }
...@@ -203,7 +188,7 @@ int radeon_cs_parser_init(struct radeon_cs_parser *p, void *data) ...@@ -203,7 +188,7 @@ int radeon_cs_parser_init(struct radeon_cs_parser *p, void *data)
p->chunks[i].length_dw = user_chunk.length_dw; p->chunks[i].length_dw = user_chunk.length_dw;
p->chunks[i].kdata = NULL; p->chunks[i].kdata = NULL;
p->chunks[i].chunk_id = user_chunk.chunk_id; p->chunks[i].chunk_id = user_chunk.chunk_id;
p->chunks[i].user_ptr = (void __user *)(unsigned long)user_chunk.chunk_data;
if (p->chunks[i].chunk_id == RADEON_CHUNK_ID_RELOCS) { if (p->chunks[i].chunk_id == RADEON_CHUNK_ID_RELOCS) {
p->chunk_relocs_idx = i; p->chunk_relocs_idx = i;
} }
...@@ -226,9 +211,6 @@ int radeon_cs_parser_init(struct radeon_cs_parser *p, void *data) ...@@ -226,9 +211,6 @@ int radeon_cs_parser_init(struct radeon_cs_parser *p, void *data)
return -EINVAL; return -EINVAL;
} }
p->chunks[i].length_dw = user_chunk.length_dw;
p->chunks[i].user_ptr = (void __user *)(unsigned long)user_chunk.chunk_data;
cdata = (uint32_t *)(unsigned long)user_chunk.chunk_data; cdata = (uint32_t *)(unsigned long)user_chunk.chunk_data;
if ((p->chunks[i].chunk_id == RADEON_CHUNK_ID_RELOCS) || if ((p->chunks[i].chunk_id == RADEON_CHUNK_ID_RELOCS) ||
(p->chunks[i].chunk_id == RADEON_CHUNK_ID_FLAGS)) { (p->chunks[i].chunk_id == RADEON_CHUNK_ID_FLAGS)) {
...@@ -478,8 +460,9 @@ static int radeon_cs_ib_vm_chunk(struct radeon_device *rdev, ...@@ -478,8 +460,9 @@ static int radeon_cs_ib_vm_chunk(struct radeon_device *rdev,
goto out; goto out;
} }
radeon_cs_sync_rings(parser); radeon_cs_sync_rings(parser);
radeon_cs_sync_to(parser, vm->fence); radeon_ib_sync_to(&parser->ib, vm->fence);
radeon_cs_sync_to(parser, radeon_vm_grab_id(rdev, vm, parser->ring)); radeon_ib_sync_to(&parser->ib, radeon_vm_grab_id(
rdev, vm, parser->ring));
if ((rdev->family >= CHIP_TAHITI) && if ((rdev->family >= CHIP_TAHITI) &&
(parser->chunk_const_ib_idx != -1)) { (parser->chunk_const_ib_idx != -1)) {
...@@ -648,3 +631,152 @@ u32 radeon_get_ib_value(struct radeon_cs_parser *p, int idx) ...@@ -648,3 +631,152 @@ u32 radeon_get_ib_value(struct radeon_cs_parser *p, int idx)
idx_value = ibc->kpage[new_page][pg_offset/4]; idx_value = ibc->kpage[new_page][pg_offset/4];
return idx_value; return idx_value;
} }
/**
* radeon_cs_packet_parse() - parse cp packet and point ib index to next packet
* @p: parser structure holding parsing context.
* @pkt: where to store packet information
* @idx: index in the IB chunk where the packet starts
*
* Assumes that chunk_ib_idx is properly set. Returns -EINVAL if the packet
* runs past the remaining IB size or if the packet type is unknown.
**/
int radeon_cs_packet_parse(struct radeon_cs_parser *p,
struct radeon_cs_packet *pkt,
unsigned idx)
{
struct radeon_cs_chunk *ib_chunk = &p->chunks[p->chunk_ib_idx];
struct radeon_device *rdev = p->rdev;
uint32_t header;
if (idx >= ib_chunk->length_dw) {
DRM_ERROR("Can not parse packet at %d after CS end %d !\n",
idx, ib_chunk->length_dw);
return -EINVAL;
}
header = radeon_get_ib_value(p, idx);
pkt->idx = idx;
pkt->type = RADEON_CP_PACKET_GET_TYPE(header);
pkt->count = RADEON_CP_PACKET_GET_COUNT(header);
pkt->one_reg_wr = 0;
switch (pkt->type) {
case RADEON_PACKET_TYPE0:
if (rdev->family < CHIP_R600) {
pkt->reg = R100_CP_PACKET0_GET_REG(header);
pkt->one_reg_wr =
RADEON_CP_PACKET0_GET_ONE_REG_WR(header);
} else
pkt->reg = R600_CP_PACKET0_GET_REG(header);
break;
case RADEON_PACKET_TYPE3:
pkt->opcode = RADEON_CP_PACKET3_GET_OPCODE(header);
break;
case RADEON_PACKET_TYPE2:
pkt->count = -1;
break;
default:
DRM_ERROR("Unknown packet type %d at %d !\n", pkt->type, idx);
return -EINVAL;
}
if ((pkt->count + 1 + pkt->idx) >= ib_chunk->length_dw) {
DRM_ERROR("Packet (%d:%d:%d) end after CS buffer (%d) !\n",
pkt->idx, pkt->type, pkt->count, ib_chunk->length_dw);
return -EINVAL;
}
return 0;
}
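For context, this helper is what the per-ASIC CS checkers are being unified around. A rough sketch of the loop a chip-specific parser builds on top of it, with error paths and per-opcode handling omitted and no particular ASIC implied:

/* Sketch of a CS parser loop driven by radeon_cs_packet_parse(). */
do {
	struct radeon_cs_packet pkt;
	int r = radeon_cs_packet_parse(p, &pkt, p->idx);
	if (r)
		return r;
	p->idx += pkt.count + 2;	/* header + (count + 1) body dwords */
	switch (pkt.type) {
	case RADEON_PACKET_TYPE0:
		/* register writes starting at pkt.reg */
		break;
	case RADEON_PACKET_TYPE2:
		break;			/* padding packet, nothing to do */
	case RADEON_PACKET_TYPE3:
		/* dispatch on pkt.opcode */
		break;
	}
} while (p->idx < p->chunks[p->chunk_ib_idx].length_dw);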
/**
* radeon_cs_packet_next_is_pkt3_nop() - test if the next packet is P3 NOP
* @p: structure holding the parser context.
*
* Check if the next packet is NOP relocation packet3.
**/
bool radeon_cs_packet_next_is_pkt3_nop(struct radeon_cs_parser *p)
{
struct radeon_cs_packet p3reloc;
int r;
r = radeon_cs_packet_parse(p, &p3reloc, p->idx);
if (r)
return false;
if (p3reloc.type != RADEON_PACKET_TYPE3)
return false;
if (p3reloc.opcode != RADEON_PACKET3_NOP)
return false;
return true;
}
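A quick illustration of how such a lookahead is typically used: an optional relocation is only consumed when a packet3 NOP actually follows. The surrounding handler is hypothetical; both helpers are defined in this file:

/* Sketch: consume an optional relocation only if one follows. */
if (radeon_cs_packet_next_is_pkt3_nop(p)) {
	struct radeon_cs_reloc *reloc;
	int r = radeon_cs_packet_next_reloc(p, &reloc, 0);
	if (r)
		return r;
	/* ... use reloc->lobj.gpu_offset to patch the command ... */
}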
/**
* radeon_cs_dump_packet() - dump raw packet context
* @p: structure holding the parser context.
* @pkt: structure holding the packet.
*
* Used mostly for debugging and error reporting.
**/
void radeon_cs_dump_packet(struct radeon_cs_parser *p,
struct radeon_cs_packet *pkt)
{
volatile uint32_t *ib;
unsigned i;
unsigned idx;
ib = p->ib.ptr;
idx = pkt->idx;
for (i = 0; i <= (pkt->count + 1); i++, idx++)
DRM_INFO("ib[%d]=0x%08X\n", idx, ib[idx]);
}
/**
* radeon_cs_packet_next_reloc() - parse next (should be reloc) packet
* @p: parser structure holding parsing context.
* @cs_reloc: where to store a pointer to the relocation entry
* @nomm: legacy path without a memory manager; read the relocation
*        directly from the chunk's kdata
*
* Check that the next packet is a relocation packet3 and return the
* matching relocation entry (with its GPU offset) through @cs_reloc.
**/
int radeon_cs_packet_next_reloc(struct radeon_cs_parser *p,
struct radeon_cs_reloc **cs_reloc,
int nomm)
{
struct radeon_cs_chunk *relocs_chunk;
struct radeon_cs_packet p3reloc;
unsigned idx;
int r;
if (p->chunk_relocs_idx == -1) {
DRM_ERROR("No relocation chunk !\n");
return -EINVAL;
}
*cs_reloc = NULL;
relocs_chunk = &p->chunks[p->chunk_relocs_idx];
r = radeon_cs_packet_parse(p, &p3reloc, p->idx);
if (r)
return r;
p->idx += p3reloc.count + 2;
if (p3reloc.type != RADEON_PACKET_TYPE3 ||
p3reloc.opcode != RADEON_PACKET3_NOP) {
DRM_ERROR("No packet3 for relocation for packet at %d.\n",
p3reloc.idx);
radeon_cs_dump_packet(p, &p3reloc);
return -EINVAL;
}
idx = radeon_get_ib_value(p, p3reloc.idx + 1);
if (idx >= relocs_chunk->length_dw) {
DRM_ERROR("Relocs at %d after relocations chunk end %d !\n",
idx, relocs_chunk->length_dw);
radeon_cs_dump_packet(p, &p3reloc);
return -EINVAL;
}
/* FIXME: we assume reloc size is 4 dwords */
if (nomm) {
*cs_reloc = p->relocs;
(*cs_reloc)->lobj.gpu_offset =
(u64)relocs_chunk->kdata[idx + 3] << 32;
(*cs_reloc)->lobj.gpu_offset |= relocs_chunk->kdata[idx + 0];
} else
*cs_reloc = p->relocs_ptr[(idx / 4)];
return 0;
}
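A hedged sketch of how a packet3 handler typically uses this helper to patch a user-supplied address with the validated GPU offset; the surrounding handler, the packet name in the error string and the local index "idx" of the address dword are all illustrative:

/* Sketch: fetch the relocation that follows a packet3 and patch the
 * address dword in the IB copy with the relocated GPU address. */
struct radeon_cs_reloc *reloc;
uint64_t offset;
int r;

r = radeon_cs_packet_next_reloc(p, &reloc, 0);
if (r) {
	DRM_ERROR("bad EXAMPLE_PACKET3\n");
	radeon_cs_dump_packet(p, pkt);
	return -EINVAL;
}
offset = radeon_get_ib_value(p, idx);	/* address as written by userspace */
offset += reloc->lobj.gpu_offset;	/* relocate it */
p->ib.ptr[idx] = lower_32_bits(offset);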
...@@ -93,6 +93,7 @@ static const char radeon_family_name[][16] = { ...@@ -93,6 +93,7 @@ static const char radeon_family_name[][16] = {
"TAHITI", "TAHITI",
"PITCAIRN", "PITCAIRN",
"VERDE", "VERDE",
"OLAND",
"LAST", "LAST",
}; };
......
...@@ -123,15 +123,25 @@ struct dma_buf *radeon_gem_prime_export(struct drm_device *dev, ...@@ -123,15 +123,25 @@ struct dma_buf *radeon_gem_prime_export(struct drm_device *dev,
int flags); int flags);
struct drm_gem_object *radeon_gem_prime_import(struct drm_device *dev, struct drm_gem_object *radeon_gem_prime_import(struct drm_device *dev,
struct dma_buf *dma_buf); struct dma_buf *dma_buf);
extern long radeon_kms_compat_ioctl(struct file *filp, unsigned int cmd,
unsigned long arg);
#if defined(CONFIG_DEBUG_FS) #if defined(CONFIG_DEBUG_FS)
int radeon_debugfs_init(struct drm_minor *minor); int radeon_debugfs_init(struct drm_minor *minor);
void radeon_debugfs_cleanup(struct drm_minor *minor); void radeon_debugfs_cleanup(struct drm_minor *minor);
#endif #endif
/* atpx handler */
#if defined(CONFIG_VGA_SWITCHEROO)
void radeon_register_atpx_handler(void);
void radeon_unregister_atpx_handler(void);
#else
static inline void radeon_register_atpx_handler(void) {}
static inline void radeon_unregister_atpx_handler(void) {}
#endif
int radeon_no_wb; int radeon_no_wb;
int radeon_modeset = -1; int radeon_modeset = 1;
int radeon_dynclks = -1; int radeon_dynclks = -1;
int radeon_r4xx_atom = 0; int radeon_r4xx_atom = 0;
int radeon_agpmode = 0; int radeon_agpmode = 0;
...@@ -199,6 +209,14 @@ module_param_named(msi, radeon_msi, int, 0444); ...@@ -199,6 +209,14 @@ module_param_named(msi, radeon_msi, int, 0444);
MODULE_PARM_DESC(lockup_timeout, "GPU lockup timeout in ms (defaul 10000 = 10 seconds, 0 = disable)"); MODULE_PARM_DESC(lockup_timeout, "GPU lockup timeout in ms (defaul 10000 = 10 seconds, 0 = disable)");
module_param_named(lockup_timeout, radeon_lockup_timeout, int, 0444); module_param_named(lockup_timeout, radeon_lockup_timeout, int, 0444);
static struct pci_device_id pciidlist[] = {
radeon_PCI_IDS
};
MODULE_DEVICE_TABLE(pci, pciidlist);
#ifdef CONFIG_DRM_RADEON_UMS
static int radeon_suspend(struct drm_device *dev, pm_message_t state) static int radeon_suspend(struct drm_device *dev, pm_message_t state)
{ {
drm_radeon_private_t *dev_priv = dev->dev_private; drm_radeon_private_t *dev_priv = dev->dev_private;
...@@ -227,14 +245,6 @@ static int radeon_resume(struct drm_device *dev) ...@@ -227,14 +245,6 @@ static int radeon_resume(struct drm_device *dev)
return 0; return 0;
} }
static struct pci_device_id pciidlist[] = {
radeon_PCI_IDS
};
#if defined(CONFIG_DRM_RADEON_KMS)
MODULE_DEVICE_TABLE(pci, pciidlist);
#endif
static const struct file_operations radeon_driver_old_fops = { static const struct file_operations radeon_driver_old_fops = {
.owner = THIS_MODULE, .owner = THIS_MODULE,
.open = drm_open, .open = drm_open,
...@@ -284,6 +294,8 @@ static struct drm_driver driver_old = { ...@@ -284,6 +294,8 @@ static struct drm_driver driver_old = {
.patchlevel = DRIVER_PATCHLEVEL, .patchlevel = DRIVER_PATCHLEVEL,
}; };
#endif
static struct drm_driver kms_driver; static struct drm_driver kms_driver;
static int radeon_kick_out_firmware_fb(struct pci_dev *pdev) static int radeon_kick_out_firmware_fb(struct pci_dev *pdev)
...@@ -411,10 +423,12 @@ static struct drm_driver kms_driver = { ...@@ -411,10 +423,12 @@ static struct drm_driver kms_driver = {
static struct drm_driver *driver; static struct drm_driver *driver;
static struct pci_driver *pdriver; static struct pci_driver *pdriver;
#ifdef CONFIG_DRM_RADEON_UMS
static struct pci_driver radeon_pci_driver = { static struct pci_driver radeon_pci_driver = {
.name = DRIVER_NAME, .name = DRIVER_NAME,
.id_table = pciidlist, .id_table = pciidlist,
}; };
#endif
static struct pci_driver radeon_kms_pci_driver = { static struct pci_driver radeon_kms_pci_driver = {
.name = DRIVER_NAME, .name = DRIVER_NAME,
...@@ -427,28 +441,6 @@ static struct pci_driver radeon_kms_pci_driver = { ...@@ -427,28 +441,6 @@ static struct pci_driver radeon_kms_pci_driver = {
static int __init radeon_init(void) static int __init radeon_init(void)
{ {
driver = &driver_old;
pdriver = &radeon_pci_driver;
driver->num_ioctls = radeon_max_ioctl;
#ifdef CONFIG_VGA_CONSOLE
if (vgacon_text_force() && radeon_modeset == -1) {
DRM_INFO("VGACON disable radeon kernel modesetting.\n");
driver = &driver_old;
pdriver = &radeon_pci_driver;
driver->driver_features &= ~DRIVER_MODESET;
radeon_modeset = 0;
}
#endif
/* if enabled by default */
if (radeon_modeset == -1) {
#ifdef CONFIG_DRM_RADEON_KMS
DRM_INFO("radeon defaulting to kernel modesetting.\n");
radeon_modeset = 1;
#else
DRM_INFO("radeon defaulting to userspace modesetting.\n");
radeon_modeset = 0;
#endif
}
if (radeon_modeset == 1) { if (radeon_modeset == 1) {
DRM_INFO("radeon kernel modesetting enabled.\n"); DRM_INFO("radeon kernel modesetting enabled.\n");
driver = &kms_driver; driver = &kms_driver;
...@@ -456,9 +448,21 @@ static int __init radeon_init(void) ...@@ -456,9 +448,21 @@ static int __init radeon_init(void)
driver->driver_features |= DRIVER_MODESET; driver->driver_features |= DRIVER_MODESET;
driver->num_ioctls = radeon_max_kms_ioctl; driver->num_ioctls = radeon_max_kms_ioctl;
radeon_register_atpx_handler(); radeon_register_atpx_handler();
} else {
#ifdef CONFIG_DRM_RADEON_UMS
DRM_INFO("radeon userspace modesetting enabled.\n");
driver = &driver_old;
pdriver = &radeon_pci_driver;
driver->driver_features &= ~DRIVER_MODESET;
driver->num_ioctls = radeon_max_ioctl;
#else
DRM_ERROR("No UMS support in radeon module!\n");
return -EINVAL;
#endif
} }
/* if the vga console setting is enabled still
* let modprobe override it */ /* let modprobe override vga console setting */
return drm_pci_init(driver, pdriver); return drm_pci_init(driver, pdriver);
} }
......
...@@ -113,6 +113,9 @@ ...@@ -113,6 +113,9 @@
#define DRIVER_MINOR 33 #define DRIVER_MINOR 33
#define DRIVER_PATCHLEVEL 0 #define DRIVER_PATCHLEVEL 0
/* The rest of the file is DEPRECATED! */
#ifdef CONFIG_DRM_RADEON_UMS
enum radeon_cp_microcode_version { enum radeon_cp_microcode_version {
UCODE_R100, UCODE_R100,
UCODE_R200, UCODE_R200,
...@@ -418,8 +421,6 @@ extern int radeon_driver_open(struct drm_device *dev, ...@@ -418,8 +421,6 @@ extern int radeon_driver_open(struct drm_device *dev,
struct drm_file *file_priv); struct drm_file *file_priv);
extern long radeon_compat_ioctl(struct file *filp, unsigned int cmd, extern long radeon_compat_ioctl(struct file *filp, unsigned int cmd,
unsigned long arg); unsigned long arg);
extern long radeon_kms_compat_ioctl(struct file *filp, unsigned int cmd,
unsigned long arg);
extern int radeon_master_create(struct drm_device *dev, struct drm_master *master); extern int radeon_master_create(struct drm_device *dev, struct drm_master *master);
extern void radeon_master_destroy(struct drm_device *dev, struct drm_master *master); extern void radeon_master_destroy(struct drm_device *dev, struct drm_master *master);
...@@ -462,15 +463,6 @@ extern void r600_blit_swap(struct drm_device *dev, ...@@ -462,15 +463,6 @@ extern void r600_blit_swap(struct drm_device *dev,
int sx, int sy, int dx, int dy, int sx, int sy, int dx, int dy,
int w, int h, int src_pitch, int dst_pitch, int cpp); int w, int h, int src_pitch, int dst_pitch, int cpp);
/* atpx handler */
#if defined(CONFIG_VGA_SWITCHEROO)
void radeon_register_atpx_handler(void);
void radeon_unregister_atpx_handler(void);
#else
static inline void radeon_register_atpx_handler(void) {}
static inline void radeon_unregister_atpx_handler(void) {}
#endif
/* Flags for stats.boxes /* Flags for stats.boxes
*/ */
#define RADEON_BOX_DMA_IDLE 0x1 #define RADEON_BOX_DMA_IDLE 0x1
...@@ -2167,4 +2159,6 @@ extern void radeon_commit_ring(drm_radeon_private_t *dev_priv); ...@@ -2167,4 +2159,6 @@ extern void radeon_commit_ring(drm_radeon_private_t *dev_priv);
} while (0) } while (0)
#endif /* CONFIG_DRM_RADEON_UMS */
#endif /* __RADEON_DRV_H__ */ #endif /* __RADEON_DRV_H__ */
...@@ -91,6 +91,7 @@ enum radeon_family { ...@@ -91,6 +91,7 @@ enum radeon_family {
CHIP_TAHITI, CHIP_TAHITI,
CHIP_PITCAIRN, CHIP_PITCAIRN,
CHIP_VERDE, CHIP_VERDE,
CHIP_OLAND,
CHIP_LAST, CHIP_LAST,
}; };
......
...@@ -28,6 +28,8 @@ ...@@ -28,6 +28,8 @@
* Authors: * Authors:
* Keith Whitwell <keith@tungstengraphics.com> * Keith Whitwell <keith@tungstengraphics.com>
 * Michel Dänzer <michel@daenzer.net> * Michel Dänzer <michel@daenzer.net>
*
* ------------------------ This file is DEPRECATED! -------------------------
*/ */
#include <drm/drmP.h> #include <drm/drmP.h>
......
...@@ -27,6 +27,8 @@ ...@@ -27,6 +27,8 @@
* *
* Authors: * Authors:
* Keith Whitwell <keith@tungstengraphics.com> * Keith Whitwell <keith@tungstengraphics.com>
*
* ------------------------ This file is DEPRECATED! -------------------------
*/ */
#include <drm/drmP.h> #include <drm/drmP.h>
......
...@@ -3706,4 +3706,19 @@ ...@@ -3706,4 +3706,19 @@
#define RV530_GB_PIPE_SELECT2 0x4124 #define RV530_GB_PIPE_SELECT2 0x4124
#define RADEON_CP_PACKET_GET_TYPE(h) (((h) >> 30) & 3)
#define RADEON_CP_PACKET_GET_COUNT(h) (((h) >> 16) & 0x3FFF)
#define RADEON_CP_PACKET0_GET_ONE_REG_WR(h) (((h) >> 15) & 1)
#define RADEON_CP_PACKET3_GET_OPCODE(h) (((h) >> 8) & 0xFF)
#define R100_CP_PACKET0_GET_REG(h) (((h) & 0x1FFF) << 2)
#define R600_CP_PACKET0_GET_REG(h) (((h) & 0xFFFF) << 2)
#define RADEON_PACKET_TYPE0 0
#define RADEON_PACKET_TYPE1 1
#define RADEON_PACKET_TYPE2 2
#define RADEON_PACKET_TYPE3 3
#define RADEON_PACKET3_NOP 0x10
#define RADEON_VLINE_STAT (1 << 12)
#endif #endif
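These shared macros replace the per-generation copies removed later in this diff. A small worked decode with a made-up header dword, showing what each field extracts:

/* Worked example (hypothetical header value 0xC0012345): */
uint32_t header = 0xC0012345;
unsigned type   = RADEON_CP_PACKET_GET_TYPE(header);    /* bits 31:30 -> 3 (packet3) */
unsigned count  = RADEON_CP_PACKET_GET_COUNT(header);   /* bits 29:16 -> 1, i.e. 2 body dwords */
unsigned opcode = RADEON_CP_PACKET3_GET_OPCODE(header); /* bits 15:8  -> 0x23 */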
...@@ -108,6 +108,25 @@ void radeon_ib_free(struct radeon_device *rdev, struct radeon_ib *ib) ...@@ -108,6 +108,25 @@ void radeon_ib_free(struct radeon_device *rdev, struct radeon_ib *ib)
radeon_fence_unref(&ib->fence); radeon_fence_unref(&ib->fence);
} }
/**
* radeon_ib_sync_to - sync to fence before executing the IB
*
* @ib: IB object to add fence to
* @fence: fence to sync to
*
* Sync to the fence before executing the IB
*/
void radeon_ib_sync_to(struct radeon_ib *ib, struct radeon_fence *fence)
{
struct radeon_fence *other;
if (!fence)
return;
other = ib->sync_to[fence->ring];
ib->sync_to[fence->ring] = radeon_fence_later(fence, other);
}
/** /**
* radeon_ib_schedule - schedule an IB (Indirect Buffer) on the ring * radeon_ib_schedule - schedule an IB (Indirect Buffer) on the ring
* *
......
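radeon_ib_sync_to() above replaces the CS-local radeon_cs_sync_to() removed earlier in this diff: dependency fences are now collected on the IB itself so radeon_ib_schedule() can wait for them before the IB runs. A recap of the two call sites already visible in radeon_cs.c above, not new code:

radeon_ib_sync_to(&p->ib, p->relocs[i].robj->tbo.sync_obj);	/* BO fences from relocations */
radeon_ib_sync_to(&parser->ib, vm->fence);			/* last fence protecting the VM */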
...@@ -25,6 +25,8 @@ ...@@ -25,6 +25,8 @@
* Authors: * Authors:
* Gareth Hughes <gareth@valinux.com> * Gareth Hughes <gareth@valinux.com>
* Kevin E. Martin <martin@valinux.com> * Kevin E. Martin <martin@valinux.com>
*
* ------------------------ This file is DEPRECATED! -------------------------
*/ */
#include <drm/drmP.h> #include <drm/drmP.h>
......
...@@ -205,17 +205,6 @@ ...@@ -205,17 +205,6 @@
REG_SET(PACKET3_IT_OPCODE, (op)) | \ REG_SET(PACKET3_IT_OPCODE, (op)) | \
REG_SET(PACKET3_COUNT, (n))) REG_SET(PACKET3_COUNT, (n)))
#define PACKET_TYPE0 0
#define PACKET_TYPE1 1
#define PACKET_TYPE2 2
#define PACKET_TYPE3 3
#define CP_PACKET_GET_TYPE(h) (((h) >> 30) & 3)
#define CP_PACKET_GET_COUNT(h) (((h) >> 16) & 0x3FFF)
#define CP_PACKET0_GET_REG(h) (((h) & 0x1FFF) << 2)
#define CP_PACKET0_GET_ONE_REG_WR(h) (((h) >> 15) & 1)
#define CP_PACKET3_GET_OPCODE(h) (((h) >> 8) & 0xFF)
/* Registers */ /* Registers */
#define R_0000F0_RBBM_SOFT_RESET 0x0000F0 #define R_0000F0_RBBM_SOFT_RESET 0x0000F0
#define S_0000F0_SOFT_RESET_CP(x) (((x) & 0x1) << 0) #define S_0000F0_SOFT_RESET_CP(x) (((x) & 0x1) << 0)
......
...@@ -139,6 +139,19 @@ ...@@ -139,6 +139,19 @@
{0x1002, 0x5e4c, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV410|RADEON_NEW_MEMMAP}, \ {0x1002, 0x5e4c, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV410|RADEON_NEW_MEMMAP}, \
{0x1002, 0x5e4d, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV410|RADEON_NEW_MEMMAP}, \ {0x1002, 0x5e4d, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV410|RADEON_NEW_MEMMAP}, \
{0x1002, 0x5e4f, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV410|RADEON_NEW_MEMMAP}, \ {0x1002, 0x5e4f, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV410|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6600, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_OLAND|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6601, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_OLAND|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6602, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_OLAND|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6603, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_OLAND|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6606, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_OLAND|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6607, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_OLAND|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6610, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_OLAND|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6611, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_OLAND|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6613, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_OLAND|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6620, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_OLAND|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6621, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_OLAND|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6623, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_OLAND|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6631, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_OLAND|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6700, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_CAYMAN|RADEON_NEW_MEMMAP}, \ {0x1002, 0x6700, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_CAYMAN|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6701, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_CAYMAN|RADEON_NEW_MEMMAP}, \ {0x1002, 0x6701, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_CAYMAN|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6702, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_CAYMAN|RADEON_NEW_MEMMAP}, \ {0x1002, 0x6702, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_CAYMAN|RADEON_NEW_MEMMAP}, \
......